Results 1 - 11 of 11
1.
IEEE Trans Med Imaging; PP, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530714

ABSTRACT

Pulmonary nodules may be an early manifestation of lung cancer, the leading cause of cancer-related deaths among both men and women. Numerous studies have established that deep learning methods can yield high performance in the detection of lung nodules in chest X-rays. However, the lack of gold-standard public datasets slows research progress and prevents benchmarking of methods for this task. To address this, we organized a public research challenge, NODE21, aimed at the detection and generation of lung nodules in chest X-rays. While the detection track assesses state-of-the-art nodule detection systems, the generation track determines the utility of nodule generation algorithms for augmenting training data and hence improving the performance of detection systems. This paper summarizes the results of the NODE21 challenge and performs extensive additional experiments to examine the impact of synthetically generated nodule training images on detection algorithm performance.

2.
Sci Rep; 13(1): 10120, 2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37344565

ABSTRACT

Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest in pulmonary nodules. To assist radiologists in reducing the number of overlooked nodules and to increase detection accuracy in general, automatic detection algorithms have been proposed. Deep learning methods in particular are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark datasets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, demonstrated by the proposed model winning the detection track of the NODE21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
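The abstract names class imbalance as a critical aspect but does not spell out the remedy. As a minimal, hypothetical sketch of one common strategy, the snippet below oversamples the minority class (e.g., nodule-positive images) until both classes are equally represented; function and variable names are illustrative, not taken from the authors' code.

```python
import numpy as np

def oversample_balanced(labels, rng=None):
    """Return sample indices that balance two classes by oversampling the minority.

    labels: 1-D array of 0/1 class labels (e.g., 1 = nodule present).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    labels = np.asarray(labels)
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Draw with replacement from the minority until both classes match in size.
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    rng.shuffle(idx)
    return idx

labels = np.array([0] * 90 + [1] * 10)  # 9:1 imbalance
idx = oversample_balanced(labels)
print(np.bincount(labels[idx]))         # balanced class counts
```

Oversampling is one of several options (loss re-weighting and hard-negative mining are common alternatives); the abstract does not state which variant was used.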


Subjects
Deep Learning; Lung Neoplasms; Multiple Pulmonary Nodules; Solitary Pulmonary Nodule; Humans; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Lung; Multiple Pulmonary Nodules/diagnostic imaging; Solitary Pulmonary Nodule/diagnostic imaging
3.
IEEE Trans Biomed Eng; 70(9): 2690-2699, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37030809

ABSTRACT

Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting the motion of tissue structures to deliver the target dose. Ultrasound offers direct imaging of tissue in real time and is considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed without markers, with a tracking error of 0.35 ± 0.2 mm and an inference time of less than 5 ms. Also, we demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.


Subjects
Deep Learning; Radiotherapy, Image-Guided; Motion; Ultrasonography/methods; Ultrasonography, Interventional; Radiotherapy, Image-Guided/methods
4.
Sci Rep; 13(1): 506, 2023 Jan 10.
Article in English | MEDLINE | ID: mdl-36627354

ABSTRACT

Robotic assistance in minimally invasive surgery offers numerous advantages for both patient and surgeon. However, the lack of force feedback in robotic surgery is a major limitation, and accurately estimating tool-tissue interaction forces remains a challenge. Image-based force estimation offers a promising solution without the need to integrate sensors into surgical tools. In this indirect approach, interaction forces are derived from the observed deformation, with learning-based methods improving accuracy and real-time capability. However, the relationship between deformation and force is determined by the stiffness of the tissue. Consequently, both deformation and local tissue properties must be observed for an approach applicable to heterogeneous tissue. In this work, we use optical coherence tomography, which can combine the detection of tissue deformation with shear wave elastography in a single modality. We present a multi-input deep learning network for the processing of local elasticity estimates and volumetric image data. Our results demonstrate that accounting for elastic properties is critical for accurate image-based force estimation across different tissue types and properties. Joint processing of local elasticity information yields the best performance throughout our phantom study. Furthermore, we test our approach on soft tissue samples that were not present during training and show that generalization to other tissue properties is possible.


Subjects
Elasticity Imaging Techniques; Robotic Surgical Procedures; Robotics; Humans; Mechanical Phenomena; Robotic Surgical Procedures/methods; Elasticity; Phantoms, Imaging; Elasticity Imaging Techniques/methods; Tomography, Optical Coherence
5.
Int J Comput Assist Radiol Surg; 17(11): 2131-2139, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35597846

ABSTRACT

OBJECTIVES: Motion compensation is a promising approach to improving the treatment of moving structures. For example, target motion can substantially affect dose delivery in radiation therapy, where methods to detect and mitigate the motion are widely used. Recent advances in fast, volumetric ultrasound have rekindled interest in ultrasound for motion tracking. We present a setup to evaluate ultrasound-based motion tracking, and we study the effect of imaging rate and motion artifacts on its performance. METHODS: We describe an experimental setup to acquire markerless 4D ultrasound data with precise ground truth from a robot, and evaluate different real-world trajectories and system settings toward accurate motion estimation. We analyze motion artifacts in continuously acquired data by comparing to data recorded in a step-and-shoot fashion. Furthermore, we investigate the trade-off between imaging frequency and resolution. RESULTS: The mean tracking errors show that continuously acquired data leads to similar results as data acquired in a step-and-shoot fashion. We report mean tracking errors up to 2.01 mm and 1.36 mm on the continuous data for the lower and higher resolution, respectively, while step-and-shoot data leads to mean tracking errors of 2.52 mm and 0.98 mm. CONCLUSIONS: We perform a quantitative analysis of different system settings for motion tracking with 4D ultrasound. We show that precise tracking is feasible and that additional motion in continuously acquired data does not impair the tracking. Moreover, the analysis of the frequency-resolution trade-off shows that a high imaging resolution is beneficial in ultrasound tracking.
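The mean tracking errors reported above are straightforward to compute once robot ground truth is available. A minimal sketch, assuming estimated and ground-truth 3-D target positions in millimeters (names are illustrative):

```python
import numpy as np

def mean_tracking_error(estimated, ground_truth):
    """Mean Euclidean distance (mm) between estimated and ground-truth 3-D positions."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.linalg.norm(estimated - ground_truth, axis=1).mean())

gt = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
est = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 3.0]])
print(mean_tracking_error(est, gt))  # (1 + 2) / 2 = 1.5
```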


Subjects
Artifacts; Diagnostic Imaging; Humans; Motion; Phantoms, Imaging; Ultrasonography/methods
6.
IEEE Trans Biomed Eng; 69(11): 3356-3364, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35439123

ABSTRACT

Ultrasound shear wave elasticity imaging is a valuable tool for quantifying the elastic properties of tissue. Typically, the shear wave velocity is derived and mapped to an elasticity value, which neglects information such as the shape of the propagating shear wave or push sequence characteristics. We present 3D spatio-temporal CNNs for fast local elasticity estimation from ultrasound data. This approach is based on retrieving elastic properties from shear wave propagation within small local regions. A large training data set is acquired with a robot from homogeneous gelatin phantoms ranging from 17.42 kPa to 126.05 kPa with various push locations. The results show that our approach can estimate elastic properties on a pixelwise basis with a mean absolute error of 5.01 (4.37) kPa. Furthermore, we estimate local elasticity independent of the push location and can even perform accurate estimates inside the push region. For phantoms with embedded inclusions, we report a 53.93% lower MAE (7.50 kPa) on the inclusions and an 85.24% lower MAE (1.64 kPa) on the background compared to a conventional shear wave method. Overall, our method offers fast local estimation of elastic properties with small spatio-temporal window sizes.
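The conventional velocity-to-elasticity mapping that this work moves beyond is, for soft incompressible tissue, E ≈ 3ρc². A minimal sketch of that baseline (assuming a tissue density of 1000 kg/m³; the speed value below is illustrative, not from the paper):

```python
def youngs_modulus_from_shear_speed(c_m_per_s, density_kg_m3=1000.0):
    """Baseline mapping E = 3 * rho * c^2 for soft, incompressible tissue.

    c_m_per_s: shear wave speed in m/s; returns Young's modulus E in kPa.
    """
    mu = density_kg_m3 * c_m_per_s ** 2  # shear modulus (Pa)
    return 3.0 * mu / 1000.0             # Young's modulus (kPa)

# Under these assumptions, ~2.41 m/s maps to ~17.4 kPa,
# roughly the soft end of the phantom range quoted above.
print(youngs_modulus_from_shear_speed(2.41))
```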


Subjects
Deep Learning; Elasticity Imaging Techniques; Elasticity Imaging Techniques/methods; Gelatin; Phantoms, Imaging; Elasticity
7.
J Biophotonics; 15(3): e202100167, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34889065

ABSTRACT

Currently, there are no fast and accurate screening methods available for head and neck cancer, the eighth most common tumor entity. For this study, we used hyperspectral imaging, an imaging technique for quantitative and objective surface analysis, combined with deep learning methods for automated tissue classification. As part of a prospective clinical observational study, hyperspectral datasets of laryngeal, hypopharyngeal and oropharyngeal mucosa were recorded in vivo in 98 patients before surgery. We established an automated data interpretation pathway that can classify tissue into healthy and tumorous using convolutional neural networks with 2D spatial or 3D spatio-spectral convolutions combined with a state-of-the-art DenseNet architecture. Using 24 patients for testing, our 3D spatio-spectral DenseNet classification method achieves an average accuracy of 81%, a sensitivity of 83% and a specificity of 79%.


Subjects
Deep Learning; Head and Neck Neoplasms; Head and Neck Neoplasms/diagnostic imaging; Humans; Hyperspectral Imaging; Neural Networks, Computer; Prospective Studies
8.
Int J Comput Assist Radiol Surg; 16(9): 1413-1423, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34251654

ABSTRACT

PURPOSE: Brain Magnetic Resonance Images (MRIs) are essential for the diagnosis of neurological diseases. Recently, deep learning methods for unsupervised anomaly detection (UAD) have been proposed for the analysis of brain MRI. These methods rely on healthy brain MRIs and eliminate the requirement of pixel-wise annotated data compared to supervised deep learning. While a wide range of methods for UAD have been proposed, these methods are mostly 2D and only learn from MRI slices, disregarding that brain lesions are inherently 3D; the spatial context of MRI volumes remains unexploited. METHODS: We investigate whether increased spatial context, by using MRI volumes combined with spatial erasing, leads to improved unsupervised anomaly segmentation performance compared to learning from slices. We evaluate and compare 2D variational autoencoders (VAEs) to their 3D counterparts, propose 3D input erasing, and systematically study the impact of dataset size on performance. RESULTS: Using two publicly available segmentation data sets for evaluation, 3D VAEs outperform their 2D counterparts, highlighting the advantage of volumetric context. Also, our 3D erasing methods allow for further performance improvements. Our best performing 3D VAE with input erasing leads to an average Dice score of 31.40% compared to 25.76% for the 2D VAE. CONCLUSIONS: We propose 3D deep learning methods for UAD in brain MRI combined with 3D erasing and demonstrate that 3D methods clearly outperform their 2D counterparts for anomaly segmentation. Also, our spatial erasing method allows for further performance improvements and reduces the requirement for large data sets.
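The 3D input erasing described above can be sketched as zeroing a random cuboid of the input volume; the erased fraction and the sampling scheme here are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def erase_3d(volume, frac=0.25, rng=None):
    """Zero out a random cuboid covering `frac` of each axis (3D input erasing)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = volume.copy()
    d, h, w = volume.shape
    ed, eh, ew = (max(1, int(s * frac)) for s in (d, h, w))
    # Sample the cuboid's corner so it fits entirely inside the volume.
    z = rng.integers(0, d - ed + 1)
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out[z:z + ed, y:y + eh, x:x + ew] = 0.0
    return out

vol = np.ones((32, 32, 32))
erased = erase_3d(vol)                  # an 8x8x8 cuboid is zeroed
print(erased.shape, int((erased == 0).sum()))
```

During training, the autoencoder would be asked to reconstruct the original volume from the erased input, encouraging it to exploit volumetric context.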


Subjects
Deep Learning; Brain/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Neuroimaging
9.
Med Image Anal; 64: 101730, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32492583

ABSTRACT

Estimating the forces acting between instruments and tissue is a challenging problem for robot-assisted minimally invasive surgery. Recently, numerous vision-based methods have been proposed to replace electro-mechanical approaches. Moreover, optical coherence tomography (OCT) and deep learning have been used for estimating forces based on deformation observed in volumetric image data. This approach demonstrated the advantage of deep learning with 3D volumetric data over 2D depth images for force estimation. In this work, we extend the problem of deep learning-based force estimation to 4D spatio-temporal data with streams of 3D OCT volumes. For this purpose, we design and evaluate several methods extending spatio-temporal deep learning to 4D, which is largely unexplored so far. Furthermore, we provide an in-depth analysis of multi-dimensional image data representations for force estimation, comparing our 4D approach to previous, lower-dimensional methods. Also, we analyze the effect of temporal information and we study the prediction of short-term future force values, which could facilitate safety features. For our 4D force estimation architectures, we find that efficient decoupling of spatial and temporal processing is advantageous. We show that using 4D spatio-temporal data outperforms all previously used data representations with a mean absolute error of 10.7 mN. We find that temporal information is valuable for force estimation and we demonstrate the feasibility of force prediction.
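One way to read "efficient decoupling of spatial and temporal processing" is a factorized convolution: a 3D spatial kernel followed by a 1D temporal kernel instead of one dense 4D kernel. The parameter-count arithmetic below (with hypothetical channel sizes) shows why the factorized form is cheaper; it is an illustration of the general principle, not the paper's exact architecture.

```python
def conv_params(c_in, c_out, kernel):
    """Parameter count of a dense convolution layer (bias terms omitted)."""
    n = c_in * c_out
    for k in kernel:
        n *= k
    return n

c_in, c_out = 16, 16
full_4d = conv_params(c_in, c_out, (3, 3, 3, 3))    # joint space-time kernel
spatial = conv_params(c_in, c_out, (1, 3, 3, 3))    # 3D spatial step
temporal = conv_params(c_out, c_out, (3, 1, 1, 1))  # 1D temporal step
factorized = spatial + temporal
print(full_4d, factorized)  # the factorized form needs far fewer parameters
```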


Subjects
Deep Learning; Tomography, Optical Coherence; Humans
10.
Int J Comput Assist Radiol Surg; 15(6): 943-952, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32445128

ABSTRACT

PURPOSE: Localizing structures and estimating the motion of a specific target region are common problems for navigation during surgical interventions. Optical coherence tomography (OCT) is an imaging modality with a high spatial and temporal resolution that has been used for intraoperative imaging and also for motion estimation, for example, in the context of ophthalmic surgery or cochleostomy. Recently, motion estimation between a template and a moving OCT image has been studied with deep learning methods to overcome the shortcomings of conventional, feature-based methods. METHODS: We investigate whether using a temporal stream of OCT image volumes can improve deep learning-based motion estimation performance. For this purpose, we design and evaluate several 3D and 4D deep learning methods and we propose a new deep learning approach. Also, we propose a temporal regularization strategy at the model output. RESULTS: Using a tissue dataset without additional markers, our deep learning methods using 4D data outperform previous approaches. The best performing 4D architecture achieves an average correlation coefficient (aCC) of 98.58%, compared to 85.0% for a previous 3D deep learning method. Also, our temporal regularization strategy at the output further improves 4D model performance to an aCC of 99.06%. In particular, our 4D method works well for larger motion and is robust toward image rotations and motion distortions. CONCLUSIONS: We propose 4D spatio-temporal deep learning for OCT-based motion estimation. On a tissue dataset, we find that using 4D information for the model input improves performance while maintaining reasonable inference times. Our regularization strategy demonstrates that additional temporal information is also beneficial at the model output.
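The temporal regularization at the model output is not specified in detail in the abstract; one simple illustrative form is a smoothness penalty on consecutive predictions in the output stream (names and shapes below are assumptions, not the authors' formulation).

```python
import numpy as np

def temporal_smoothness_penalty(preds):
    """Mean squared difference between consecutive predictions in a sequence.

    preds: array of shape (T, D) -- a length-T stream of D-dim motion estimates.
    A smooth trajectory yields a small penalty; a jumpy one a large penalty.
    """
    diffs = np.diff(np.asarray(preds, dtype=float), axis=0)
    return float((diffs ** 2).mean())

smooth = np.array([[0.0], [0.1], [0.2], [0.3]])  # steady motion
jumpy = np.array([[0.0], [0.3], [0.0], [0.3]])   # oscillating estimates
print(temporal_smoothness_penalty(smooth), temporal_smoothness_penalty(jumpy))
```

Added to the task loss with a small weight, such a term discourages physically implausible jumps between consecutive motion estimates.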


Subjects
Deep Learning; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Tomography, Optical Coherence; Algorithms; Equipment Design; Humans; Motion; Robotic Surgical Procedures; Time Factors; Tissue Distribution
11.
Int J Comput Assist Radiol Surg; 14(11): 1837-1845, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31129859

ABSTRACT

PURPOSE: The gold standard for colorectal cancer metastases detection in the peritoneum is histological evaluation of a removed tissue sample. For feedback during interventions, real-time in vivo imaging with confocal laser microscopy has been proposed for differentiation of benign and malignant tissue by manual expert evaluation. Automatic image classification could improve the surgical workflow further by providing immediate feedback. METHODS: We analyze the feasibility of classifying tissue from confocal laser microscopy in the colon and peritoneum. For this purpose, we adopt both classical and state-of-the-art convolutional neural networks to directly learn from the images. As the available dataset is small, we investigate several transfer learning strategies, including partial freezing variants and full fine-tuning. We address the distinction of different tissue types, as well as benign and malignant tissue. RESULTS: We present a thorough analysis of transfer learning strategies for colorectal cancer with confocal laser microscopy. In the peritoneum, metastases are classified with an AUC of 97.1, and in the colon the primary tumor is classified with an AUC of 73.1. In general, transfer learning substantially improves performance over training from scratch. We find that the optimal transfer learning strategy differs between models and classification tasks. CONCLUSIONS: We demonstrate that convolutional neural networks and transfer learning can be used to identify cancer tissue with confocal laser microscopy. We show that there is no generally optimal transfer learning strategy; model- as well as task-specific engineering is required. Given the high performance for the peritoneum, even with a small dataset, application for intraoperative decision support could be feasible.
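The partial-freezing variants mentioned above can be sketched as a plan that keeps the first n pretrained layers fixed and fine-tunes the rest; the layer names and selection rule below are purely illustrative, not taken from the study.

```python
def freeze_plan(layer_names, n_frozen):
    """Partial freezing for transfer learning: keep the first n_frozen
    (pretrained) layers fixed, fine-tune the rest.

    Returns a mapping {layer_name: trainable?} (True = weights are updated).
    """
    return {name: i >= n_frozen for i, name in enumerate(layer_names)}

layers = ["conv1", "conv2", "conv3", "conv4", "fc"]
plan = freeze_plan(layers, 3)   # freeze the three earliest feature extractors
print(plan)
```

Sweeping n_frozen from 0 (full fine-tuning) to the full depth (feature extraction only) is the kind of comparison the abstract describes; with small datasets, freezing early layers limits overfitting while still adapting task-specific layers.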


Subjects
Colonic Neoplasms/classification; Deep Learning; Microscopy, Confocal/methods; Neural Networks, Computer; Peritoneal Neoplasms/secondary; Colonic Neoplasms/diagnosis; Colonic Neoplasms/secondary; Feasibility Studies; Humans; Neoplasm Metastasis; Peritoneal Neoplasms/diagnosis